240 research outputs found
Therapeutic effects of ulinastatin on postoperative complications and cognitive function in elderly patients with esophageal cancer after thoracic laparoscopic surgery
Purpose: To investigate the therapeutic effect of ulinastatin on postoperative complications and cognitive function in elderly patients with esophageal cancer after thoracic laparoscopic surgery.
Methods: A total of 100 elderly in-patients with esophageal cancer who had undergone thoracic laparoscopic surgery from April 2019 to December 2020 were selected and randomly assigned to control and study groups. Patients in the control group received conventional treatment, while those in the study group were administered ulinastatin. The two groups were compared with respect to response, incidence of postoperative complications, Mini-Mental State Examination (MMSE) cognitive function score, and Barthel Index (BI) score; preoperative, intraoperative, 12-h and 24-h post-surgery levels of IL-1β and IL-6; levels of CD3+, CD4+ and CD8+; as well as duration of surgery and waking time.
Results: Response, MMSE score, BI index, and levels of CD3+, CD4+ and CD8+ in the study group were significantly higher than those in the control group (p < 0.05). Incidence of postoperative complications, and expression levels of IL-1β and IL-6 12 h and 24 h after surgery in the study group were lower than the corresponding control levels (p < 0.05). There were no significant differences in duration of operation and waking time between the two groups (p > 0.05).
Conclusion: Ulinastatin significantly reduces postoperative complications and improves cognitive function in elderly patients with esophageal cancer after thoracic laparoscopic surgery. This finding is of great significance in the treatment of these patients.
T-COL: Generating Counterfactual Explanations for General User Preferences on Variable Machine Learning Systems
Machine learning (ML)-based systems have long suffered from a lack of
interpretability. To address this problem, counterfactual explanations (CEs)
have been proposed. CEs are unique in that they provide actionable suggestions to
users, in addition to explaining why a certain outcome was predicted. However,
the application of CEs has been hindered by two main challenges, namely general
user preferences and variable ML systems. User preferences, in particular, tend
to be general rather than specific feature values. Additionally, CEs need to be
customized to suit the variability of ML models, while also maintaining
robustness even when these validation models change. To overcome these
challenges, we propose several possible general user preferences that have been
validated by user research and map them to the properties of CEs. We also
introduce a new method called Tree-based Conditions
Optional Links (T-COL), which has two optional structures and
several groups of conditions for generating CEs that can be adapted to general
user preferences. Meanwhile, a specific group of conditions leads T-COL to generate more
robust CEs that retain higher validity when the ML model is replaced. We compared
the properties of CEs generated by T-COL experimentally under different user
preferences and demonstrated that T-COL is better suited for accommodating user
preferences and variable ML systems compared to baseline methods including
Large Language Models.
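T-COL itself is not reproduced here; as a rough illustration of what a counterfactual explanation is, the sketch below greedily perturbs a single feature of a denied input until a toy classifier's prediction flips. The model, feature names, and search strategy are all invented for illustration.

```python
def classify(income, debt):
    """Toy loan-approval model: approve when income exceeds debt by 20."""
    return "approved" if income - debt > 20 else "denied"

def counterfactual_income(income, debt, step=1, max_steps=200):
    """Search for the smallest income increase that flips a denial.

    Returns the counterfactual income, or None if none is found within
    max_steps. This greedy single-feature search is only a stand-in for
    real CE methods such as T-COL.
    """
    if classify(income, debt) == "approved":
        return income  # already the desired outcome
    for k in range(1, max_steps + 1):
        if classify(income + k * step, debt) == "approved":
            return income + k * step
    return None

# A denied applicant learns how much more income would change the outcome.
cf = counterfactual_income(income=50, debt=40)  # flips at income 61
```

The "workable suggestion" property of CEs is visible here: rather than only reporting "denied", the method returns a concrete feature change that would alter the decision.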
Evaluate What You Can't Evaluate: Unassessable Quality for Generated Response
LLMs (large language models) such as ChatGPT have shown remarkable language
understanding and generation capabilities. Although reference-free evaluators
based on LLMs show better human alignment than traditional reference-based
evaluators, there are many challenges in using reference-free evaluators based
on LLMs. Reference-free evaluators are better suited to open-ended examples
that admit semantically different responses, but not all examples are
open-ended. For closed-ended examples with a unique correct semantic response,
reference-free evaluators may still rate a response as high quality even when
it is inconsistent with the facts and the semantics of the reference. In order to
comprehensively evaluate the reliability of evaluators based on LLMs, we
construct two adversarial meta-evaluation dialogue generation datasets
KdConv-ADV and DSTC7-ADV based on KdConv and DSTC7-AVSD, respectively. Compared
to previous meta-evaluation benchmarks, KdConv-ADV and DSTC7-ADV are much more
challenging since they require evaluators to reasonably assess
closed-ended examples with the help of external knowledge or even their own
knowledge. Empirical results show that the ability of LLMs to identify
unreasonable responses is insufficient, and that there are risks in using
reference-free evaluators based on LLMs to evaluate the quality of dialogue responses. Comment: preprint
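The failure mode the abstract describes can be made concrete with two toy scoring proxies (both functions and the example sentences are invented here): a reference-free fluency heuristic cannot distinguish a fluent but factually wrong answer from the correct one, while a reference-based overlap score can.

```python
def overlap_score(response, reference):
    """Reference-based proxy: token-level F1 against the gold answer."""
    r, g = response.lower().split(), reference.lower().split()
    common = len(set(r) & set(g))
    if common == 0:
        return 0.0
    precision, recall = common / len(r), common / len(g)
    return 2 * precision * recall / (precision + recall)

def fluency_score(response):
    """Crude reference-free proxy: reward well-formed, mid-length answers.

    Deliberately blind to factual content, mimicking how a surface-level
    reference-free evaluator can be fooled on closed-ended examples.
    """
    n = len(response.split())
    well_formed = response[:1].isupper() and response.endswith(".")
    return (0.5 if well_formed else 0.0) + min(n, 10) / 20

reference = "The Eiffel Tower is in Paris."
wrong_but_fluent = "The Eiffel Tower is in Berlin."
```

Here `fluency_score` assigns the wrong answer the same score as the reference itself, whereas `overlap_score` penalizes the factual substitution; adversarial sets like KdConv-ADV probe exactly this gap, albeit against far stronger LLM-based evaluators.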
Alleviating Sparsity of Open Knowledge Graphs with Ternary Contrastive Learning
Sparsity of formal knowledge and the roughness of non-ontological construction
make the sparsity problem particularly prominent in Open Knowledge Graphs
(OpenKGs). Due to sparse links, learning effective representations for few-shot
entities becomes difficult. We hypothesize that by introducing negative
samples, a contrastive learning (CL) formulation could be beneficial in such
scenarios. However, existing CL methods model KG triplets as binary objects of
entities, ignoring the relation-guided ternary propagation patterns, and they are
too generic, i.e., they overlook the zero-shot, few-shot and synonymity problems that
appear in OpenKGs. To address this, we propose TernaryCL, a CL framework based
on ternary propagation patterns among head, relation and tail. TernaryCL
designs Contrastive Entity and Contrastive Relation to mine ternary
discriminative features with both negative entities and relations, introduces
Contrastive Self to help zero- and few-shot entities learn discriminative
features, Contrastive Synonym to model synonymous entities, and Contrastive
Fusion to aggregate graph features from multiple paths. Extensive experiments
on benchmarks demonstrate the superiority of TernaryCL over state-of-the-art
models.
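The core ingredient shared by frameworks like TernaryCL is a contrastive loss over triples with sampled negatives. The sketch below computes an InfoNCE-style loss over one positive tail and several negative tails, using a TransE-like dot-product score; the embeddings, scoring function, and dimensions are toy assumptions, not TernaryCL's actual design.

```python
import math

def triple_score(head, rel, tail):
    """Score a (head, relation, tail) triple as the dot product of
    head+relation with tail (a TransE-like toy scorer)."""
    return sum((h + r) * t for h, r, t in zip(head, rel, tail))

def contrastive_loss(head, rel, pos_tail, neg_tails):
    """InfoNCE over one positive tail and several negative tails:
    -log(exp(s_pos) / sum over all candidates of exp(s))."""
    scores = [triple_score(head, rel, pos_tail)]
    scores += [triple_score(head, rel, t) for t in neg_tails]
    m = max(scores)  # subtract the max to stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[0] / sum(exps))

head, rel = [0.2, 0.1], [0.3, -0.1]
pos = [1.0, 0.5]
negs = [[-1.0, 0.2], [0.1, -0.9]]
loss = contrastive_loss(head, rel, pos, negs)
```

Minimizing this loss pushes the positive tail's score above the negatives', which is what lets few-shot entities pick up discriminative features from contrast rather than from abundant links.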
A Graph Reasoning Network for Multi-turn Response Selection via Customized Pre-training
We investigate response selection for multi-turn conversation in
retrieval-based chatbots. Existing studies focus on matching utterances with
responses by computing matching scores over learned features, which leaves
models with insufficient reasoning ability. In this
paper, we propose a graph-reasoning network (GRN) to address the problem. GRN
first conducts pre-training based on ALBERT using next utterance prediction and
utterance order prediction tasks specifically devised for response selection.
These two customized pre-training tasks endow our model with the ability to
capture semantic and chronological dependencies between utterances. We then
fine-tune the model on an integrated network with sequence reasoning and graph
reasoning structures. The sequence reasoning module conducts inference based on
the highly summarized context vector of utterance-response pairs from the
global perspective. The graph reasoning module conducts the reasoning on the
utterance-level graph neural network from the local perspective. Experiments on
two conversational reasoning datasets show that our model dramatically
outperforms strong baseline methods and achieves performance close to
human level. Comment: Accepted by AAAI 2021; 10 pages, 6 figures
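The next-utterance-prediction pre-training task mentioned above can be sketched as a data-construction step: for each position in a dialogue, pair the context with the true next utterance (label 1) and with a distractor drawn from elsewhere (label 0). The function name and sampling scheme are illustrative assumptions, not GRN's exact procedure.

```python
import random

def nup_examples(dialogue, seed=0):
    """Build (context, candidate, label) triples for next-utterance
    prediction: the true next utterance is labeled 1, and a randomly
    drawn different utterance from the same dialogue is labeled 0."""
    rng = random.Random(seed)
    examples = []
    for i in range(1, len(dialogue)):
        context, positive = dialogue[:i], dialogue[i]
        examples.append((context, positive, 1))
        distractors = [u for u in dialogue if u != positive]
        examples.append((context, rng.choice(distractors), 0))
    return examples

dialogue = ["Hi!", "Hello, how can I help?", "My order is late.", "Let me check."]
pairs = nup_examples(dialogue)  # 3 positive and 3 negative examples
```

The companion utterance-order-prediction task would instead shuffle utterances and ask the model to recover their order, giving it the chronological signal the abstract refers to.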
Personalized sentiment classification based on latent individuality of microblog users
Sentiment expression in microblog posts often reflects a user's specific individuality due to differences in language habits, personal character, opinion bias and so on. Existing sentiment classification algorithms largely ignore such latent personal distinctions among different microblog users. Meanwhile, sentiment data of microblogs are sparse for individual users, making it infeasible to learn an effective personalized classifier. In this paper, we propose a novel, extensible personalized sentiment classification method based on a variant of the latent factor model to capture personal sentiment variations by mapping users and posts into a low-dimensional factor space. We alleviate the sparsity of personal texts by decomposing posts into words, which are further represented by weighted sentiment and topic units based on a set of syntactic units of words obtained from dependency parsing results. To strengthen the representation of users, we leverage users' following relations to consolidate the individuality of a user, fused from other users with similar interests. Results on real-world microblog datasets confirm that our method outperforms state-of-the-art baseline algorithms by large margins.
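The latent factor core of such a model can be sketched in a few lines: a personalized sentiment score is an inner product of user and post factor vectors, trained by gradient steps on observed labels. This minimal form omits the paper's sentiment/topic units and follower fusion; the vectors, dimensions, and learning rate are illustrative assumptions.

```python
def predict(user_vec, post_vec, bias=0.0):
    """Personalized sentiment score: inner product of the user's and
    the post's latent factors plus a global bias term."""
    return bias + sum(u * p for u, p in zip(user_vec, post_vec))

def sgd_step(user_vec, post_vec, label, lr=0.1):
    """One gradient step on squared error for a single (user, post)
    pair, updating both factor vectors toward the observed label."""
    err = predict(user_vec, post_vec) - label
    new_user = [u - lr * err * p for u, p in zip(user_vec, post_vec)]
    new_post = [p - lr * err * u for u, p in zip(user_vec, post_vec)]
    return new_user, new_post

user, post = [0.1, 0.0], [0.5, 0.2]
before = (predict(user, post) - 1.0) ** 2
user, post = sgd_step(user, post, label=1.0)
after = (predict(user, post) - 1.0) ** 2  # squared error shrinks
```

Because the user vector is shared across all of that user's posts, signal from data-rich users shapes the factor space that sparse users are embedded in, which is one way latent factor models mitigate per-user sparsity.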